Recently, graph neural networks have been gaining significant attention for simulating dynamical systems, owing to their inductive nature, which enables zero-shot generalizability. Similarly, physics-informed inductive biases in deep-learning frameworks have been shown to give superior performance in learning the dynamics of physical systems. There is a growing volume of literature that attempts to combine these two approaches. Here, we evaluate the performance of thirteen different graph neural networks, namely Hamiltonian and Lagrangian graph neural networks, graph neural ODEs, and their variants with explicit constraints and different architectures. We briefly explain the theoretical formulations, highlighting the similarities and differences in the inductive biases and graph architectures of these systems. We evaluate these models on spring, pendulum, gravitational, and 3D deformable solid systems, comparing their performance in terms of rollout error, conserved quantities such as energy and momentum, and generalizability to unseen system sizes. Our study demonstrates that GNNs with additional inductive biases, such as explicit constraints and decoupling of kinetic and potential energies, exhibit significantly enhanced performance. Further, all the physics-informed GNNs exhibit zero-shot generalizability to system sizes an order of magnitude larger than the training system, thus providing a promising route to simulating large-scale realistic systems.
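The comparison above relies on rollout and energy-violation metrics. Below is a minimal sketch of how such metrics could be computed from predicted and ground-truth trajectories; the metric definitions and the random stand-in data are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def rollout_error(q_pred: np.ndarray, q_true: np.ndarray) -> np.ndarray:
    """Per-step mean squared position error; trajectories have shape (T, N, dim)."""
    return ((q_pred - q_true) ** 2).mean(axis=(1, 2))

def energy_error(energy_pred: np.ndarray, energy_true: np.ndarray) -> np.ndarray:
    """Relative energy violation at each rollout step."""
    return np.abs(energy_pred - energy_true) / (np.abs(energy_true) + 1e-12)

# Random stand-in trajectories: T=100 steps, N=5 particles, 3 spatial dimensions.
T, N, dim = 100, 5, 3
rng = np.random.default_rng(0)
q_true = np.cumsum(rng.standard_normal((T, N, dim)) * 0.01, axis=0)
q_pred = q_true + rng.standard_normal((T, N, dim)) * 0.001
E_true = np.full(T, 1.0)
E_pred = E_true + np.cumsum(np.full(T, 1e-4))   # slow, artificial energy drift

print(rollout_error(q_pred, q_true).mean(), energy_error(E_pred, E_true).max())
```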
Cement is the most widely used construction material. The performance of cement hydrates depends, both qualitatively and quantitatively, on the constituent phases, viz. alite, belite, aluminate, and ferrites, present in the cement clinker. Traditionally, clinker phases are analyzed from optical images, relying on a domain expert and simple image processing techniques. However, the non-uniformity of the images, variations in the geometry and size of the phases, and variability in the experimental approaches and imaging methods make it challenging to identify the phases. Here, we present a machine learning (ML) approach to detect clinker microstructure phases automatically. To this end, we create the first annotated dataset of cement clinker by segmenting alite and belite particles. Further, we use supervised ML methods to train models for identifying alite and belite regions. Specifically, we finetune the image detection and segmentation model Detectron-2 on the cement microstructure images to develop Cementron, a model for detecting the cement phases. We demonstrate that Cementron, trained only on literature data, works remarkably well on new images obtained from our experiments, demonstrating its generalizability. We make Cementron available for public use.
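For context, a minimal sketch of fine-tuning a Detectron2 instance-segmentation model on a COCO-style annotated dataset is shown below, following the library's standard training loop. The dataset names, file paths, class count, and solver settings are illustrative assumptions, not the authors' configuration.

```python
import os

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Hypothetical COCO-format annotations for alite/belite instances.
register_coco_instances("clinker_train", {}, "annotations/train.json", "images/train")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("clinker_train",)
cfg.DATASETS.TEST = ()
cfg.DATALOADER.NUM_WORKERS = 2
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 2.5e-4          # illustrative learning rate
cfg.SOLVER.MAX_ITER = 3000           # illustrative iteration budget
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2  # alite, belite

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)        # standard Detectron2 fine-tuning loop
trainer.resume_or_load(resume=False)
trainer.train()
```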
Lagrangian and Hamiltonian neural networks (LNNs and HNNs, respectively) encode strong inductive biases that allow them to significantly outperform other models of physical systems. However, these models have so far mostly been limited to simple systems, such as pendulums and springs, or to single rigid bodies, such as a gyroscope or a rigid rotor. Here, we present a Lagrangian graph neural network (LGNN) that can learn the dynamics of rigid bodies by exploiting their topology. We demonstrate the performance of LGNN by learning the dynamics of ropes, chains, and trusses modeled as collections of rigid bars. LGNN also exhibits generalizability: an LGNN trained on chains with a few segments generalizes to simulate chains with a large number of links and arbitrary link lengths. We also show that LGNN can simulate unseen hybrid systems, including bars and chains, on which it has not been trained. Specifically, we show that LGNN can be used to model the dynamics of complex real-world structures, such as the stability of tensegrity structures. Finally, we discuss the non-diagonal nature of the mass matrix and its ability to generalize in complex systems.
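For reference, the constrained Euler-Lagrange equations underlying such Lagrangian models can be written in the following standard form, stated here under the assumption of velocity-level constraints A(q)q̇ = 0; the notation is generic and may differ from the paper's.

```latex
% Standard constrained Euler-Lagrange form (assumed generic notation):
M(q)\,\ddot{q} = \Pi + A^{\top}\lambda,
\qquad M = \nabla_{\dot{q}}\nabla_{\dot{q}}^{\top}L,
\qquad \Pi = \nabla_{q}L - \bigl(\nabla_{q}\nabla_{\dot{q}}^{\top}L\bigr)\dot{q},
```
```latex
\lambda = -\bigl(A M^{-1} A^{\top}\bigr)^{-1}\bigl(A M^{-1}\Pi + \dot{A}\dot{q}\bigr),
\qquad \ddot{q} = M^{-1}\bigl(\Pi + A^{\top}\lambda\bigr).
```

Here M is the (generally non-diagonal) mass matrix obtained from the learned Lagrangian, and λ collects the Lagrange multipliers that enforce the constraints.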
Neural networks with physics-based inductive biases, such as Lagrangian neural networks (LNNs) and Hamiltonian neural networks (HNNs), learn the dynamics of physical systems by encoding strong inductive biases. Alternatively, neural ODEs with appropriate inductive biases have been shown to achieve similar performance. However, when applied to particle-based systems, these models are transductive in nature and hence do not generalize to larger system sizes. In this paper, we present a graph-based neural ODE, GNODE, to learn the time evolution of dynamical systems. Further, we carefully analyze the role of different inductive biases on the performance of GNODE. We show that, similar to LNNs and HNNs, explicitly encoding the constraints can significantly improve the training efficiency and performance of GNODE. Our experiments also evaluate the value of additional inductive biases, such as Newton's third law, on the final performance of the model. We demonstrate that inducing these biases can enhance the performance of the model by orders of magnitude in terms of both energy violation and rollout error. Interestingly, we observe that the GNODE trained with the most effective inductive biases, namely MCGNODE, outperforms the graph versions of LNN and HNN, namely the Lagrangian graph network (LGN) and Hamiltonian graph network (HGN), in terms of energy violation error by approximately four orders of magnitude for the pendulum system and approximately two orders of magnitude for the spring system. These results suggest that performance competitive with energy-conserving neural networks can be obtained for NODE-based systems by inducing the appropriate inductive biases.
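To make the idea concrete, here is a toy sketch of one graph-based neural ODE step: edge messages computed from relative positions are aggregated at receiving nodes to predict per-node accelerations, which are then integrated in time. The tiny MLPs, features, random parameters, and semi-implicit Euler integrator are simplifying assumptions, not the paper's GNODE.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    """Two-layer MLP used as a stand-in for learned edge/node functions."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def gnode_step(q, v, edges, params, dt=0.01):
    senders, receivers = edges                   # arrays of node indices
    rel = q[senders] - q[receivers]              # edge features: relative positions
    msg = mlp(rel, *params["edge"])              # per-edge messages
    agg = np.zeros_like(q)
    np.add.at(agg, receivers, msg)               # sum messages at receiving nodes
    acc = mlp(np.concatenate([v, agg], axis=-1), *params["node"])
    v_next = v + dt * acc                        # semi-implicit Euler integration
    return q + dt * v_next, v_next

# Random stand-in parameters: 2 nodes in 2D joined by one bidirected edge.
rng = np.random.default_rng(0)
params = {
    "edge": (rng.standard_normal((2, 16)), np.zeros(16), rng.standard_normal((16, 2)), np.zeros(2)),
    "node": (rng.standard_normal((4, 16)), np.zeros(16), rng.standard_normal((16, 2)), np.zeros(2)),
}
q = np.array([[0.0, 0.0], [1.0, 0.0]])
v = np.zeros((2, 2))
edges = (np.array([0, 1]), np.array([1, 0]))
q, v = gnode_step(q, v, edges, params)
```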
Physical systems are commonly represented as a combination of particles whose individual dynamics govern the dynamics of the system. However, traditional approaches require knowledge of several abstract quantities, such as energies or forces, to infer the dynamics of these particles. Here, we present a framework, namely the Lagrangian graph neural network (LGNN), that provides a strong inductive bias to learn the Lagrangian of a particle-based system directly from its trajectory. We test our approach on challenging systems with constraints and drag, where LGNN outperforms baselines such as the feed-forward Lagrangian neural network (LNN) with improved performance. We also show the zero-shot generalizability of the system by simulating systems two orders of magnitude larger than the one it was trained on, as well as hybrid systems unseen during training, a unique feature. The graph architecture of LGNN significantly simplifies learning compared to LNN, with roughly 25 times better performance on small amounts of data. Finally, we show the interpretability of LGNN, which directly provides physical insight into the drag and constraint forces learned by the model. LGNN can thus provide a route toward understanding the dynamics of physical systems purely from observable quantities.
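As an illustration of how accelerations can be recovered from a learned Lagrangian by automatic differentiation, the sketch below uses a toy analytical Lagrangian standing in for the network and omits constraints and drag; it is an assumed minimal example, not the paper's implementation.

```python
import torch
from torch.autograd.functional import hessian, jacobian

n = 2  # degrees of freedom

def lagrangian(z):
    # Toy Lagrangian standing in for a learned network:
    # L(q, qdot) = 0.5*|qdot|^2 - 0.5*|q|^2 (unit-mass harmonic oscillators).
    q, qdot = z[:n], z[n:]
    return 0.5 * qdot.dot(qdot) - 0.5 * q.dot(q)

q = torch.tensor([0.3, -0.1])
qdot = torch.tensor([0.0, 0.5])
z = torch.cat([q, qdot])

H = hessian(lagrangian, z)   # (2n, 2n) Hessian of L w.r.t. (q, qdot)
g = jacobian(lagrangian, z)  # (2n,) gradient of L w.r.t. (q, qdot)

M = H[n:, n:]                # mass matrix d^2L/dqdot^2
coriolis = H[n:, :n] @ qdot  # (d^2L/dqdot dq) qdot term
qddot = torch.linalg.solve(M, g[:n] - coriolis)  # Euler-Lagrange: M qddot = dL/dq - coriolis
print(qddot)                 # expect -q for this toy Lagrangian
```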
A crucial component in the curation of knowledge bases (KBs) for a scientific domain is information extraction from the tables in the domain's published articles: tables carry important information (often numeric) that must be adequately extracted for a comprehensive machine understanding of an article. Existing table extractors assume prior knowledge of table structure and format, which may not be available for scientific tables. We study a specific and challenging table extraction problem: extracting the compositions of materials (e.g., glasses, alloys). We first observe that materials science researchers organize similar compositions in a wide variety of table styles, necessitating an intelligent model for table understanding and composition extraction. Consequently, we define this novel task as a challenge for the ML community and create a training dataset comprising 4,408 distantly supervised tables, along with 1,475 manually annotated dev and test tables. We also present DiSCoMaT, a strong baseline geared towards this specific task, which combines multiple graph neural networks with several task-specific regular expressions, features, and constraints. We show that DiSCoMaT outperforms state-of-the-art table processing architectures by significant margins.
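As a toy illustration of the kind of task-specific regular expression such a system might use, the pattern below matches inline composition strings such as "70SiO2-20Na2O-10CaO"; real tables require many more patterns plus the graph-neural-network components, and this example is assumed, not taken from the paper.

```python
import re

# Illustrative pattern: a numeric amount followed by a chemical formula.
COMPONENT = re.compile(r"(?P<amount>\d+(?:\.\d+)?)\s*(?P<compound>(?:[A-Z][a-z]?\d*)+)")

print(COMPONENT.findall("70SiO2-20Na2O-10CaO"))
# [('70', 'SiO2'), ('20', 'Na2O'), ('10', 'CaO')]
```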
While the brain connectivity network can inform the understanding and diagnosis of developmental dyslexia, its cause-effect relationships have not yet been examined in sufficient depth. Employing electroencephalography signals and band-limited white noise stimulus at 4.8 Hz (prosodic-syllabic frequency), we measure the phase Granger causalities among channels to identify differences between dyslexic learners and controls, thereby proposing a method to calculate directional connectivity. As causal relationships run in both directions, we explore three scenarios, namely channels' activity as sources, as sinks, and in total. Our proposed method can be used for both classification and exploratory analysis. In all scenarios, we find confirmation of the established right-lateralized Theta sampling network anomaly, in line with the temporal sampling framework's assumption of oscillatory differences in the Theta and Gamma bands. Further, we show that this anomaly primarily occurs in the causal relationships of channels acting as sinks, where it is significantly more pronounced than when only total activity is observed. In the sink scenario, our classifier obtains 0.84 and 0.88 accuracy and 0.87 and 0.93 AUC for the Theta and Gamma bands, respectively.
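A simplified stand-in for such a pipeline is sketched below: two synthetic "channels" are band-limited to the Theta band, their instantaneous phases are extracted with the Hilbert transform, and a standard Granger causality test is applied. The paper's phase Granger causality measure, preprocessing, and data are more involved; the sampling rate, filter, and signals here are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from statsmodels.tsa.stattools import grangercausalitytests

fs = 500.0                         # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Two synthetic "EEG channels": y lags x, so x should Granger-cause y.
x = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
y = np.roll(x, 20) + 0.5 * rng.standard_normal(t.size)

def theta_phase(sig, low=4.0, high=8.0):
    """Band-limit to the Theta band and extract the instantaneous phase."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, sig)))

# Test whether the phase of x Granger-causes the phase of y (second column -> first).
data = np.column_stack([theta_phase(y), theta_phase(x)])
res = grangercausalitytests(data, maxlag=5, verbose=False)
print(res[5][0]["ssr_ftest"])      # (F statistic, p-value, df_denom, df_num)
```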
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world, and early DR detection is necessary to prevent vision loss and support appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System to classify DR grading, localize lesion areas, and provide visual explanations; (ii) DRG-Expert-Interaction to receive feedback from expert users and improve the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations by using Wasserstein distance and adversarial learning-based entropy minimization. Besides, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion and classification features, our approach is robust to a certain level of noise in user feedback. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRID and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly-supervised manner.
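A minimal sketch of a joint objective in the spirit described above is given below, combining a grading classification term, a lesion-segmentation term, and an entropy-minimization term on unlabeled predictions; the terms, weighting, and tensor shapes are assumptions, not DRG-Net's actual losses.

```python
import torch
import torch.nn.functional as F

def joint_loss(grade_logits, grade_target, seg_logits, seg_target, unlab_logits, lam=0.1):
    """Illustrative joint objective: DR-grading cross-entropy + lesion-segmentation
    BCE + entropy minimization on unlabeled grading predictions (assumed form)."""
    l_grade = F.cross_entropy(grade_logits, grade_target)
    l_seg = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    p = F.softmax(unlab_logits, dim=1)
    l_ent = -(p * torch.log(p + 1e-8)).sum(dim=1).mean()
    return l_grade + l_seg + lam * l_ent

# Random stand-in batch: 4 images, 5 DR grades, 3 lesion types on 64x64 masks.
loss = joint_loss(
    torch.randn(4, 5), torch.randint(0, 5, (4,)),
    torch.randn(4, 3, 64, 64), torch.randint(0, 2, (4, 3, 64, 64)).float(),
    torch.randn(4, 5),
)
print(loss.item())
```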
The rapid growth of machine translation (MT) systems has necessitated comprehensive studies to meta-evaluate evaluation metrics being used, which enables a better selection of metrics that best reflect MT quality. Unfortunately, most of the research focuses on high-resource languages, mainly English, the observations for which may not always apply to other languages. Indian languages, having over a billion speakers, are linguistically different from English, and to date, there has not been a systematic study of evaluating MT systems from English into Indian languages. In this paper, we fill this gap by creating an MQM dataset consisting of 7000 fine-grained annotations, spanning 5 Indian languages and 7 MT systems, and use it to establish correlations between annotator scores and scores obtained using existing automatic metrics. Our results show that pre-trained metrics, such as COMET, have the highest correlations with annotator scores. Additionally, we find that the metrics do not adequately capture fluency-based errors in Indian languages, and there is a need to develop metrics focused on Indian languages. We hope that our dataset and analysis will help promote further research in this area.
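MQM annotations are typically converted to segment-level numeric scores and then correlated with automatic metric scores. A minimal sketch with made-up numbers follows; the scoring scheme and values are illustrative assumptions, not the paper's data.

```python
from scipy.stats import kendalltau, pearsonr

# Hypothetical segment-level scores: MQM-derived human penalties vs. one automatic metric.
human = [-5.0, -1.0, 0.0, -10.0, -2.5, -0.5]
metric = [0.61, 0.82, 0.91, 0.35, 0.74, 0.88]

print("Pearson r:", pearsonr(human, metric)[0])
print("Kendall tau:", kendalltau(human, metric)[0])
```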
We present Naamapadam, the largest publicly available Named Entity Recognition (NER) dataset for the 11 major Indian languages from two language families. For 9 out of the 11 languages, it contains more than 400k sentences annotated with a total of at least 100k entities from three standard entity categories (Person, Location, and Organization). The training dataset has been automatically created from the Samanantar parallel corpus by projecting automatically tagged entities from the English sentences to the corresponding Indian-language sentences. We also create manually annotated test sets for 8 languages containing approximately 1000 sentences per language. We demonstrate the utility of the obtained dataset on existing test sets and the Naamapadam-test data for 8 Indic languages. We also release IndicNER, a multilingual mBERT model fine-tuned on the Naamapadam training set. IndicNER achieves the best F1 on the Naamapadam-test set compared to an mBERT model fine-tuned on existing datasets, and an F1 score of more than 80 for 7 out of the 11 Indic languages. The dataset and models are available under open-source licenses at https://ai4bharat.iitm.ac.in/naamapadam.
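A minimal usage sketch for running a fine-tuned token-classification checkpoint with the HuggingFace pipeline is shown below. The model identifier is an assumption; check the project page for the actual released checkpoint name.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_id = "ai4bharat/IndicNER"  # assumed hub id for the released IndicNER checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Group word pieces into entity spans and tag a Hindi sentence.
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("नरेंद्र मोदी दिल्ली में रहते हैं"))  # expect a Person span and a Location span
```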